    CABE : a cloud-based acoustic beamforming emulator for FPGA-based sound source localization

    Microphone arrays are gaining in popularity thanks to the availability of low-cost microphones. Applications such as sonar, binaural hearing aids, acoustic indoor localization and speech recognition have been proposed by several research groups and companies. In most of the available implementations, the microphones are assumed to offer an ideal response over a given frequency range. Several toolboxes and software packages can be used to obtain the theoretical response of a microphone array with a given beamforming algorithm. However, no tool could be found that facilitates the design of a microphone array while taking its non-ideal characteristics into account. Moreover, to our knowledge, generating packages that facilitate the implementation on Field-Programmable Gate Arrays (FPGAs) has not yet been carried out. Visualizing the responses in 2D and 3D also poses an engineering challenge. To alleviate these shortcomings, a scalable Cloud-based Acoustic Beamforming Emulator (CABE) is proposed. The non-ideal characteristics of the microphones are considered during the computations, and the results are validated against acoustic data captured from real microphones. The emulator can also generate hardware description language packages containing delay tables, facilitating the implementation of Delay-and-Sum beamformers in embedded hardware. Truncation error analysis can also be carried out for fixed-point signal processing, and the effect of disabling a given group of microphones within the array can be calculated. Results and packages can be visualized with a dedicated client application. Users can create and configure several parameters of an emulation, including the sound source placement, the shape of the microphone array and the required signal processing flow. Depending on the user configuration, the client application can generate 2D and 3D graphs of the beamforming results, waterfall diagrams and performance metrics. The emulations are also validated with data captured from existing microphone arrays.
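
    The delay tables mentioned above are the core of a Delay-and-Sum beamformer: each microphone's signal is shifted by its time of flight for a given steering direction before summation. Below is a minimal Python sketch of how such a table might be computed for a far-field source; the array geometry, sample rate and rounding scheme are illustrative assumptions, not CABE's actual parameters.

```python
import numpy as np

def delay_table(mic_positions, steering_angles_deg, fs=48_000, c=343.0):
    """Integer-sample delays for a far-field Delay-and-Sum beamformer.

    mic_positions: (M, 2) array of microphone x/y coordinates in metres
    steering_angles_deg: azimuth angles (degrees) to steer towards
    fs: sampling rate in Hz (assumed); c: speed of sound in m/s
    """
    table = {}
    for az in steering_angles_deg:
        theta = np.deg2rad(az)
        direction = np.array([np.cos(theta), np.sin(theta)])
        # Projecting each microphone onto the arrival direction gives its
        # time-of-flight offset relative to the array origin.
        tau = mic_positions @ direction / c
        # Shift so all delays are non-negative, then quantize to samples.
        table[az] = np.round((tau - tau.min()) * fs).astype(int)
    return table

# Example: an 8-microphone uniform circular array with a 5 cm radius.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
mics = 0.05 * np.column_stack([np.cos(angles), np.sin(angles)])
print(delay_table(mics, steering_angles_deg=[0, 45, 90]))
```

    Quantizing the delays to whole samples is one source of the fixed-point truncation error that an emulator like CABE analyzes.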

    Separating Lentiviral Vector Injection and Induction of Gene Expression in Time, Does Not Prevent an Immune Response to rtTA in Rats

    BACKGROUND: Lentiviral gene transfer can provide long-term expression of therapeutic genes such as erythropoietin. Because overexpression of erythropoietin can be toxic, regulated expression is needed. Doxycycline-inducible vectors can regulate the expression of therapeutic transgenes efficiently. However, because they express an immunogenic transactivator (rtTA), their utility for gene therapy is limited. In addition to immunogenic proteins expressed from inducible vectors, injection of the vector itself is likely to elicit an immune response, because viral capsid proteins induce "danger signals" that trigger an innate response and recruit inflammatory cells. METHODOLOGY AND PRINCIPAL FINDINGS: We have developed an autoregulatory lentiviral vector in which basal expression of rtTA is very low. This enabled us to temporally separate the injection of the virus from the expression of the therapeutic gene and rtTA. Wistar rats were injected with an autoregulatory rat erythropoietin expression vector. Two or six weeks after injection, erythropoietin expression was induced by doxycycline. This resulted in an increase in hematocrit, irrespective of the timing of the induction. However, most rats responded only once to doxycycline administration. Antibodies against rtTA were detected in both the early and late induction groups. CONCLUSIONS: Our results suggest that, even when viral vector capsid proteins have disappeared, expression of foreign proteins in muscle will lead to an immune response.

    Environmental Sound Recognition on Embedded Systems: From FPGAs to TPUs

    In recent years, Environmental Sound Recognition (ESR) has become a relevant capability for urban monitoring applications. Techniques for automated sound recognition often rely on machine learning approaches, which have grown in complexity in order to achieve higher accuracy. Nonetheless, such machine learning techniques often have to be deployed on resource- and power-constrained embedded devices, which has become a challenge with the adoption of deep learning approaches based on Convolutional Neural Networks (CNNs). Field-Programmable Gate Arrays (FPGAs) are power efficient and highly suitable for computationally intensive algorithms such as CNNs. By fully exploiting their parallel nature, they have the potential to reduce inference time compared to other embedded devices. Similarly, dedicated architectures to accelerate Artificial Intelligence (AI), such as Tensor Processing Units (TPUs), promise to deliver high accuracy while achieving high performance. In this work, we evaluate existing tool flows to deploy CNN models on FPGAs as well as on TPU platforms. We propose and adjust several CNN-based sound classifiers to be embedded on such hardware accelerators. The results demonstrate the maturity of the existing tools and how FPGAs can be exploited to outperform TPUs.
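
    As an illustration of the kind of model such tool flows consume, here is a compact CNN sound classifier in PyTorch operating on log-mel spectrogram patches. It is a generic sketch, not one of the paper's classifiers; the layer sizes, input shape and class count are placeholder assumptions.

```python
import torch
import torch.nn as nn

class TinyESRNet(nn.Module):
    """Compact CNN over log-mel spectrograms, sized for edge accelerators."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling keeps it small
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))

# A 64-mel patch of roughly one second of audio (~100 frames, assumed).
logits = TinyESRNet()(torch.randn(1, 1, 64, 100))
print(logits.shape)                            # torch.Size([1, 10])
```

    A model of roughly this size can typically be quantized to 8-bit integers, the format that FPGA and Edge TPU tool flows generally expect.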

    M3-AC: A Multi-Mode Multithread SoC FPGA Based Acoustic Camera

    Acoustic cameras allow the visualization of sound sources using microphone arrays and beamforming techniques. The required computational power increases with the number of microphones in the array and the resolution of the acoustic images, particularly when targeting real-time operation. Such a constraint limits the use of acoustic cameras in many wireless sensor network applications (surveillance, industrial monitoring, etc.). In this paper, we propose a multi-mode System-on-Chip (SoC) Field-Programmable Gate Array (FPGA) architecture capable of satisfying the high computational demand while providing wireless communication for remote control and monitoring. This architecture produces real-time acoustic images at a resolution of 240 × 180, scalable to 640 × 480, by exploiting the multithreading capabilities of the hard-core processor. Furthermore, the timing costs of the different operational modes and resolutions are investigated in order to maintain a real-time system under wireless sensor network constraints.
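
    The multithreaded scaling described above can be pictured as distributing the per-pixel steered-power computation across threads. The Python sketch below mirrors that idea on a host processor; the microphone count, frame length and placeholder power computation are assumptions for illustration, not the M3-AC implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def steered_power(row, width, frames):
    """Placeholder for the per-pixel beamformed power along one image row.
    A real acoustic camera would sum delayed microphone signals for each
    pixel's steering direction; here we only fake a comparable workload."""
    return np.array([np.sum(frames[:, (row + col) % frames.shape[1]] ** 2)
                     for col in range(width)])

def acoustic_image(frames, height=180, width=240, workers=4):
    """Build an acoustic image row by row across worker threads, the same
    work division a multithreaded hard-core processor could apply."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = pool.map(lambda r: steered_power(r, width, frames),
                        range(height))
    return np.vstack(list(rows))

mic_frames = np.random.randn(52, 1024)   # 52 mics x 1024 samples (assumed)
image = acoustic_image(mic_frames)
print(image.shape)                        # (180, 240)
```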

    Autonomous Wireless Sensor Networks in an IPM Spatial Decision Support System

    Until recently, data acquisition in integrated pest management (IPM) relied on manual collection of both pest and environmental data. Autonomous wireless sensor networks (WSNs) are providing a way forward by reducing the need for manual offload and maintenance; however, there is still a significant gap in pest management using WSNs, with most applications failing to provide a low-cost, autonomous monitoring system that can operate in remote areas. In this study, we investigate the feasibility of implementing a reliable, fully independent, low-power WSN that provides high-resolution, near-real-time input to a spatial decision support system (SDSS), capturing the small-scale heterogeneity needed for intelligent IPM. The WSN hosts a dual uplink that takes advantage of both satellite and terrestrial communication. A set of tests was conducted to assess metrics such as signal strength, data transmission and bandwidth of the SatCom module, as well as mesh configuration, energetic autonomy, point-to-point communication and data loss of the WSN nodes. Finally, we demonstrate the SDSS output from two vector models forced by WSN data from a field site in Belgium. We believe that this system can be a cost-effective solution for intelligent IPM in remote areas where there is no reliable terrestrial connection.
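
    The dual uplink boils down to a link-selection policy at the gateway: use the cheap terrestrial link when its quality suffices, and fall back to satellite otherwise. The sketch below is a hypothetical illustration of such a policy; the RSSI threshold, payload format and link names are invented for the example and do not describe the paper's gateway firmware.

```python
import random

RSSI_THRESHOLD_DBM = -100   # assumed cut-off for a usable terrestrial link

def select_uplink(rssi_dbm):
    """Prefer the terrestrial link; fall back to satellite so remote
    deployments never go dark (hypothetical policy)."""
    return "terrestrial" if rssi_dbm > RSSI_THRESHOLD_DBM else "satcom"

def send_reading(payload, rssi_dbm):
    link = select_uplink(rssi_dbm)
    # A real gateway would hand the payload to the chosen radio driver here.
    return link, payload

# Simulated node loop: link quality varies with field conditions.
for _ in range(3):
    rssi = random.uniform(-120, -80)
    link, _ = send_reading({"temp_c": 21.4}, rssi)
    print(f"RSSI {rssi:.0f} dBm -> {link}")
```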

    Autonomous Wireless Sensor Networks in automated IPM

    Until recently, data acquisition in Integrated Pest Management (IPM) relied on manual collection of pest and meteorological data. Automated Wireless Sensor Networks (WSNs) are providing a way forward by reducing the need for manual offload and maintenance and by providing an independent system that captures the small-scale heterogeneity needed for smart IPM applications. Efficient local monitoring or surveillance is essential to prevent the spread and establishment of pests. In this study, we investigate the feasibility of implementing a fully independent, low-power WSN that provides high-resolution, near-real-time environmental data as input to pest control systems. The gateway is equipped with a smart dual uplink allowing both satellite and terrestrial communication, depending on deployment conditions. In order to evaluate the quality of service that the developed network can provide in an IPM context, a set of tests was conducted to assess metrics such as signal strength, data transmission and bandwidth of the satellite communication module, as well as mesh configuration, energetic autonomy, point-to-point communication and data loss of the WSN nodes. We believe that this system can be a cost-effective solution for smart IPM in remote areas where there is no reliable terrestrial connection.

    MosAIc: A Classical Machine Learning Multi-Classifier Based Approach against Deep Learning Classifiers for Embedded Sound Classification

    Environmental Sound Recognition has become a relevant application for smart cities. Such an application, however, demands trained machine learning classifiers in order to categorize a limited set of audio classes. Although classical machine learning solutions have been proposed in the past, most of the latest solutions proposed for automated and accurate sound classification are based on deep learning. Deep learning models tend to be large, which is problematic given that sound classifiers often have to be embedded in resource-constrained devices. In this paper, a classical machine learning multi-classifier called MosAIc and a lighter Convolutional Neural Network model for environmental sound recognition are proposed to compete directly in accuracy with the latest deep learning solutions. Both approaches are evaluated on an embedded system in order to identify the key parameters when placing such applications on constrained devices. The experimental results show that classical machine learning classifiers can be combined to achieve results similar to deep learning models, and even to outperform them in accuracy. The cost, however, is a longer classification time.
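
    Combining classical classifiers into a multi-classifier is commonly done with a voting ensemble. The scikit-learn sketch below illustrates the general idea with a soft-voting ensemble over placeholder audio features; it is not MosAIc's actual recipe, and the feature dimensionality, choice of estimators and class count are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in features: in practice these would be MFCCs or similar
# descriptors extracted from each audio clip.
X = np.random.randn(200, 40)
y = np.random.randint(0, 10, size=200)

# Soft voting averages the class probabilities of several lightweight
# classical classifiers -- the general multi-classifier idea.
ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```

    Averaging probabilities across small models is one simple way an ensemble can rival a single large network while keeping each member cheap to run on a constrained device.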

    XCycles Backprojection Acoustic Super-Resolution

    The computer vision community has paid much attention to the development of visible-image super-resolution (SR) using deep neural networks (DNNs) and has achieved impressive results. The advancement of non-visible-light sensors, such as acoustic imaging sensors, has also attracted much attention, as they allow people to visualize the intensity of sound waves beyond the visible spectrum. However, because of the limitations imposed on acquiring acoustic data, new methods for improving the resolution of acoustic images are necessary. At this time, there is no acoustic imaging dataset designed for the SR problem. This work proposes a novel backprojection model architecture for the acoustic image super-resolution problem, together with the Acoustic Map Imaging VUB-ULB Dataset (AMIVU). The dataset provides large simulated and real captured images at different resolutions. The proposed XCycles BackProjection model (XCBP), in contrast to the feedforward model approach, fully uses the iterative correction procedure in each cycle to reconstruct the residual error correction for the encoded features in both low- and high-resolution space. The proposed approach was evaluated on the dataset and markedly outperformed classical interpolation operators as well as recent feedforward state-of-the-art models. It also drastically reduced the sub-sampling error produced during data acquisition.
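
    The cycle-wise residual correction in XCBP is a learned analogue of classical iterative back-projection, in which the current high-resolution estimate is re-degraded, compared with the observed low-resolution image, and corrected with the back-projected residual. Below is a minimal NumPy sketch of that classical procedure, with box-filter degradation and nearest-neighbour back-projection as assumed operators; it illustrates the principle, not the paper's learned model.

```python
import numpy as np

def downsample(img, factor):
    """Box-filter downsampling used as the assumed degradation model."""
    h, w = img.shape
    return (img[:h - h % factor, :w - w % factor]
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

def upsample(img, factor):
    """Nearest-neighbour upsampling as a cheap back-projection operator."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def iterative_backprojection(lr, factor=4, n_iter=10, step=1.0):
    """Classic iterative back-projection: re-degrade the current estimate,
    compare with the observed low-res image, back-project the residual."""
    hr = upsample(lr, factor)                   # initial estimate
    for _ in range(n_iter):
        residual = lr - downsample(hr, factor)  # error in low-res space
        hr = hr + step * upsample(residual, factor)
    return hr

lr_map = np.random.rand(40, 60)                 # toy low-res acoustic map
hr_map = iterative_backprojection(lr_map)
print(hr_map.shape)                             # (160, 240)
```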

    Convergent synthesis of dendrimers based on 1,3,3-trisubstituted 2-oxindoles

    Dendrons and dendrimers were convergently prepared using an isatin as the AB2 monomer by superelectrophilic arylation in trifluoromethanesulfonic acid. This strategy has the advantage that incomplete reactions of the AB2 monomer are minimized, thus simplifying purification. As the obtained dendrons and dendrimers are analogues of the hyperbranched polymers with a degree of branching of 100% developed earlier in our group, an opportunity is created to compare the latter with their structurally perfect counterparts.